Insurance


AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning Tao Yang

Neural Information Processing Systems

Fine-tuning large pre-trained language models on downstream tasks is apt to suffer from overfitting when limited training data is available. While dropout proves to be an effective antidote by randomly dropping a proportion of units, existing research has not examined its effect on the self-attention mechanism. In this paper, we investigate this problem through self-attention attribution and find that dropping attention positions with low attribution scores can accelerate training and increase the risk of overfitting.
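The abstract's finding suggests the complementary strategy the method is named for: drop the *high*-attribution attention positions so the model cannot lean only on its most-attributed connections. A minimal NumPy sketch of that idea (an illustrative helper, not the paper's implementation):

```python
import numpy as np

def ad_drop_mask(attribution: np.ndarray, drop_rate: float = 0.3) -> np.ndarray:
    """Return a 0/1 mask over attention positions that zeroes out the
    drop_rate fraction with the HIGHEST attribution scores."""
    flat = attribution.ravel()
    k = int(drop_rate * flat.size)
    if k == 0:
        return np.ones_like(attribution)
    # k-th largest score: positions at or above it are dropped
    threshold = np.partition(flat, -k)[-k]
    return (attribution < threshold).astype(attribution.dtype)

# Toy 2x2 attribution map: drop the two most-attributed positions
attr = np.array([[0.9, 0.1],
                 [0.2, 0.8]])
mask = ad_drop_mask(attr, drop_rate=0.5)
```

In a real model the mask would be applied to the attention logits before softmax; here it just illustrates the selection rule.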


Large Scale Transfer Learning for Tabular Data via Language Modeling Josh Gardner, Juan C. Perdomo, Ludwig Schmidt

Neural Information Processing Systems

Tabular data - structured, heterogeneous, spreadsheet-style data with rows and columns - is widely used in practice across many domains. However, while recent foundation models have reduced the need for developing task-specific datasets and predictors in domains such as language modeling and computer vision, this transfer learning paradigm has not had similar impact in the tabular domain.


Trenton Chang, Lindsay Warrenburg

Neural Information Processing Systems

In many settings, machine learning models may be used to inform decisions that impact individuals or entities who interact with the model. Such entities, or agents, may game model decisions by manipulating their inputs to the model to obtain better outcomes and maximize some utility. We consider a multi-agent setting where the goal is to identify the "worst offenders:" agents that are gaming most aggressively. However, identifying such agents is difficult without being able to evaluate their utility function. Thus, we introduce a framework featuring a gaming deterrence parameter, a scalar that quantifies an agent's (un)willingness to game. We show that this gaming parameter is only partially identifiable. By recasting the problem as a causal effect estimation problem where different agents represent different "treatments," we prove that a ranking of all agents by their gaming parameters is identifiable. We present empirical results in a synthetic data study validating the usage of causal effect estimation for gaming detection and show in a case study of diagnosis coding behavior in the U.S. that our approach highlights features associated with gaming.
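The identifiability claim can be illustrated with a toy numeric example (invented numbers, not the paper's estimator): treat each agent as a "treatment", estimate its effect against a shared control sample by difference in means, and sort by the estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true gaming intensities (unobservable in practice)
true_gaming = {"agent_A": 0.1, "agent_B": 0.5, "agent_C": 0.9}

# Shared control outcomes: no gaming
control = rng.normal(loc=0.0, scale=0.05, size=2000)

# Difference-in-means effect estimate per agent ("treatment")
effects = {}
for agent, g in true_gaming.items():
    treated = rng.normal(loc=g, scale=0.05, size=2000)
    effects[agent] = treated.mean() - control.mean()

# Ranking by estimated effect recovers the true gaming order
ranking = sorted(effects, key=effects.get, reverse=True)
```

Even though each agent's absolute gaming parameter is only partially identifiable, the *ordering* of the effect estimates is stable, which is the quantity the paper shows is identifiable.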


A More Backgrounds

Neural Information Processing Systems

A.1 Distributional RL. Distributional RL [2, 3, 8] is an area of RL that considers the distribution of the cumulative return Z. In this paper, we estimate the quantiles of the cumulative sum of costs using the quantile loss, and use them to solve the constrained optimization problem (QuantCP).

A.3 The Considered Constrained Problems. In this subsection, we list the problems considered for constrained RL. The first constrained problem is a common one used in many previous constrained RL papers. Note that the CVaR and the quantile are two different measures of undesirable events, and the choice between the two depends on what we want to control. For example, an insurance company prefers the CVaR of undesirable events when determining an insurance premium.
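To make the quantile-vs-CVaR distinction concrete, here is a minimal sketch (illustrative, not the paper's code) of the pinball (quantile) loss and an empirical CVaR estimate on toy cost samples:

```python
import numpy as np

def quantile_loss(pred: float, costs: np.ndarray, tau: float) -> float:
    """Pinball loss; minimized when pred is the tau-quantile of costs."""
    diff = costs - pred
    return float(np.mean(np.maximum(tau * diff, (tau - 1) * diff)))

def empirical_cvar(costs: np.ndarray, alpha: float) -> float:
    """Mean of the worst (1 - alpha) tail of the cost distribution."""
    var = np.quantile(costs, alpha)
    return float(costs[costs >= var].mean())

costs = np.arange(1.0, 101.0)        # toy cost samples 1..100
q90 = np.quantile(costs, 0.9)        # 0.9-quantile: a threshold cost
cvar90 = empirical_cvar(costs, 0.9)  # mean of the worst 10%: strictly larger
```

The quantile answers "what cost is exceeded 10% of the time?", while the CVaR answers "how bad is it, on average, when that threshold is exceeded?" — which is why an insurer pricing tail risk would prefer the CVaR.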


A new law in this state bans automated insurance claim denials

FOX News

As some health insurance companies have come under fire for allegedly using computer systems to shoot down claims, an Arizona law will soon make the practice illegal in the Grand Canyon State. Republican Arizona House Majority Whip Rep. Julie Willoughby sponsored the legislation, and it was recently signed into law by Democratic Gov. Katie Hobbs. House Bill 2175 requires a physician licensed in the state to conduct an "individual review" and use "independent medical judgment" to determine whether a claim should actually be denied. It also requires a similar review of "a direct denial of a prior authorization of a service" that a provider requested and that "involves medical necessity."


Zero-shot causal learning

Neural Information Processing Systems

Predicting how different interventions will causally affect a specific individual is important in a variety of domains such as personalized medicine, public policy, and online marketing. There are a large number of methods to predict the effect of an existing intervention based on historical data from individuals who received it. However, in many settings it is important to predict the effects of novel interventions (e.g., a newly invented drug), which these methods do not address. Here, we consider zero-shot causal learning: predicting the personalized effects of a novel intervention. We propose CaML, a causal meta-learning framework which formulates the personalized prediction of each intervention's effect as a task. CaML trains a single meta-model across thousands of tasks, each constructed by sampling an intervention, its recipients, and its nonrecipients. By leveraging both intervention information (e.g., a drug's attributes) and individual features (e.g., a patient's history), CaML is able to predict the personalized effects of novel interventions that do not exist at the time of training. Experimental results on real world datasets in large-scale medical claims and cell-line perturbations demonstrate the effectiveness of our approach. Most strikingly, CaML's zero-shot predictions outperform even strong baselines trained directly on data from the test interventions.
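The task construction described above can be sketched schematically (hypothetical record fields, not CaML's actual API): each task pairs one intervention's recipients against its non-recipients.

```python
def build_task(records, intervention):
    """One CaML-style task: recipients vs. non-recipients of an intervention."""
    recipients = [r for r in records if r["intervention"] == intervention]
    nonrecipients = [r for r in records if r["intervention"] != intervention]
    return recipients, nonrecipients

# Toy claims records (invented fields and values)
records = [
    {"patient": 1, "intervention": "drug_A", "outcome": 0.2},
    {"patient": 2, "intervention": "drug_A", "outcome": 0.5},
    {"patient": 3, "intervention": "drug_B", "outcome": 0.1},
    {"patient": 4, "intervention": None,     "outcome": 0.3},
]

# Meta-training would iterate this over thousands of interventions,
# each paired with features describing the intervention itself
tasks = {i: build_task(records, i) for i in ("drug_A", "drug_B")}
```

Because the meta-model conditions on intervention attributes rather than an intervention ID, it can be queried at test time with an intervention that never appeared in training.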





Discrimination-free Insurance Pricing with Privatized Sensitive Attributes

arXiv.org Machine Learning

Fairness has emerged as a critical consideration in machine learning, particularly as AI continues to transform decision-making across societal domains. To ensure that algorithms do not discriminate against individuals based on sensitive attributes such as gender and race, the field of algorithmic fairness has introduced various fairness notions, along with methodologies to achieve them in different contexts. Despite this rapid advancement, not all sectors have embraced these principles to the same extent. One sector that merits attention in this regard is insurance. Within insurance pricing, fairness is defined through a distinct and specialized framework, so achieving fairness according to established notions does not automatically ensure fair pricing. In particular, regulators are increasingly emphasizing transparency in pricing algorithms and constraining how insurance companies collect and use sensitive consumer attributes. These factors present additional challenges for implementing fairness in pricing algorithms. To address these complexities and comply with regulatory demands, we propose an efficient method for constructing fair models tailored to the insurance domain, using only privatized sensitive attributes. Notably, our approach provides statistical guarantees, does not require direct access to sensitive attributes, and adapts to varying transparency requirements, addressing regulatory demands while ensuring fairness in insurance pricing.
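The abstract does not specify the privatization mechanism. One standard choice for a binary sensitive attribute is randomized response, sketched below as an illustration only (the parameter names are mine, and this may not match the paper's mechanism):

```python
import math
import random

def randomized_response(value: int, epsilon: float, rng: random.Random) -> int:
    """Locally private report of a binary attribute: report the true value
    with probability e^eps / (e^eps + 1), otherwise flip the bit."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return value if rng.random() < p_truth else 1 - value

rng = random.Random(0)

# With a large epsilon the report is almost always truthful
report = randomized_response(1, epsilon=10.0, rng=rng)

# With epsilon = 0 the report is a fair coin flip, revealing nothing
flips = [randomized_response(1, epsilon=0.0, rng=rng) for _ in range(10000)]
truthful_rate = sum(flips) / len(flips)
```

The pricing model then sees only the noisy reports, which is what makes "does not require direct access to sensitive attributes" possible while still allowing debiased population-level estimates.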